    Diffusion Maps for Group-Invariant Manifolds

    In this article, we consider the manifold learning problem when the data set is invariant under the action of a compact Lie group K. Our approach consists in augmenting the data-induced graph Laplacian by integrating over orbits of the existing data points under the action of K. We prove that this K-invariant Laplacian operator L can be diagonalized using the unitary irreducible representation matrices of K, and we provide an explicit formula for computing the eigenvalues and eigenvectors of L. Moreover, we show that the normalized Laplacian operator L_N converges to the Laplace-Beltrami operator of the data manifold with an improved convergence rate, where the improvement grows with the dimension of the symmetry group K. This work extends the steerable graph Laplacian framework of Landa and Shkolnisky from the case of SO(2) to arbitrary compact Lie groups.
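    The orbit-averaging idea behind the K-invariant Laplacian can be sketched numerically. The snippet below is a minimal illustration for K = SO(2) only, not the paper's construction: it averages a Gaussian graph-kernel over a discretized orbit of one data point (the number of angle samples is an arbitrary choice here), and checks that the averaged kernel is invariant to rotating either argument.

```python
import numpy as np

def so2_invariant_kernel(x, y, eps=0.5, n_angles=64):
    """Gaussian kernel averaged over the SO(2) orbit of y.

    A sketch of orbit averaging: instead of exp(-|x - y|^2 / eps),
    integrate the kernel over rotations R(t) y of y. Discretizing the
    group integral with n_angles samples is an assumption made here
    purely for illustration.
    """
    thetas = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    vals = []
    for t in thetas:
        R = np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]])
        vals.append(np.exp(-np.sum((x - R @ y) ** 2) / eps))
    return np.mean(vals)

# Rotating either argument leaves the averaged kernel (nearly) unchanged.
rng = np.random.default_rng(0)
x, y = rng.normal(size=2), rng.normal(size=2)
phi = 0.7
R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
print(abs(so2_invariant_kernel(x, y) - so2_invariant_kernel(R @ x, y)))
```

    Because the integrand is smooth and periodic, the trapezoidal discretization of the group integral is accurate to near machine precision, so the printed difference is essentially zero.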

    A clever elimination strategy for efficient minimal solvers

    We present a new insight into the systematic generation of minimal solvers in computer vision, which leads to smaller and faster solvers. Many minimal problem formulations are coupled sets of linear and polynomial equations where image measurements enter the linear equations only. We show that it is useful to solve such systems by first eliminating all the unknowns that do not appear in the linear equations and then extending solutions to the rest of the unknowns. This can be generalized to fully non-linear systems by linearization via lifting. We demonstrate that this approach leads to more efficient solvers in three problems of partially calibrated relative camera pose computation with unknown focal length and/or radial distortion. Our approach also generates new interesting constraints on the fundamental matrices of partially calibrated cameras, which were not known before.
    Comment: 13 pages, 7 figures
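    The eliminate-then-extend strategy can be shown on a toy system (not the paper's camera-pose problems; the equations below are invented for illustration, and SymPy's resultant stands in for the elimination machinery): the linear equation involves only x and y, so z is eliminated from the polynomial equations first, the reduced system in (x, y) is solved, and each solution is then extended back to z.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
linear = x + y - 3          # measurement-style linear constraint, z-free
poly1 = x * z - 2           # polynomial equations coupling z to x, y
poly2 = y * z - 1

# Step 1: eliminate z via the resultant of the two polynomial equations,
# leaving a polynomial in the "linear" unknowns x and y only.
reduced = sp.resultant(poly1, poly2, z)
sols_xy = sp.solve([linear, reduced], [x, y], dict=True)

# Step 2: extend each (x, y) solution back to the eliminated unknown z.
solutions = []
for s in sols_xy:
    z_val = sp.solve(poly1.subs(s), z)
    solutions.append({**s, z: z_val[0]})
print(solutions)
```

    Here the resultant of x*z - 2 and y*z - 1 with respect to z is 2y - x (up to sign), so the reduced system forces (x, y) = (2, 1) and back-substitution gives z = 1.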

    Moment Estimation for Nonparametric Mixture Models Through Implicit Tensor Decomposition

    We present an alternating least squares type numerical optimization scheme to estimate conditionally-independent mixture models in R^n, without parameterizing the distributions. Following the method of moments, we tackle an incomplete tensor decomposition problem to learn the mixing weights and componentwise means. Then we compute the cumulative distribution functions, higher moments and other statistics of the component distributions through linear solves. Crucially for computations in high dimensions, the steep costs associated with high-order tensors are evaded, via the development of efficient tensor-free operations. Numerical experiments demonstrate the competitive performance of the algorithm, and its applicability to many models and applications. Furthermore, we provide theoretical analyses, establishing identifiability from low-order moments of the mixture and guaranteeing local linear convergence of the ALS algorithm.
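    A numerical sanity check of the moment identity the method-of-moments approach builds on (this is not the paper's ALS algorithm, only the underlying factorization): for a conditionally independent mixture, the off-diagonal second moments depend on the components only through the mixing weights w and the componentwise means, E[X_i X_j] = sum_k w_k mu_k[i] mu_k[j] for i != j, regardless of the component distributions' shapes.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.array([0.3, 0.7])                      # mixing weights
mu = np.array([[1.0, -2.0, 0.5],              # component means in R^3
               [-1.0, 1.0, 2.0]])

n = 200_000
labels = rng.choice(2, size=n, p=w)
# Within each component, coordinates are drawn independently (here:
# unit-variance Gaussians, an arbitrary choice for the demo).
samples = mu[labels] + rng.normal(size=(n, 3))

empirical = samples.T @ samples / n            # empirical second moments
predicted = mu.T @ np.diag(w) @ mu             # w-weighted outer products
i, j = 0, 1                                    # any off-diagonal pair
print(abs(empirical[i, j] - predicted[i, j]))  # small sampling error
```

    The diagonal entries do depend on the component variances, which is why only the off-diagonal (cross-moment) entries enter the incomplete tensor decomposition.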

    The effect of smooth parametrizations on nonconvex optimization landscapes

    We develop new tools to study landscapes in nonconvex optimization. Given one optimization problem, we pair it with another by smoothly parametrizing the domain. This is either for practical purposes (e.g., to use smooth optimization algorithms with good guarantees) or for theoretical purposes (e.g., to reveal that the landscape satisfies a strict saddle property). In both cases, the central question is: how do the landscapes of the two problems relate? More precisely: how do desirable points such as local minima and critical points in one problem relate to those in the other problem? A key finding in this paper is that these relations are often determined by the parametrization itself, and are almost entirely independent of the cost function. Accordingly, we introduce a general framework to study parametrizations by their effect on landscapes. The framework enables us to obtain new guarantees for an array of problems, some of which were previously treated on a case-by-case basis in the literature. Applications include: optimizing low-rank matrices and tensors through factorizations; solving semidefinite programs via the Burer-Monteiro approach; training neural networks by optimizing their weights and biases; and quotienting out symmetries.
    Comment: Substantially reorganized the paper to make the main results and examples more prominent
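    A toy illustration (not the paper's framework) of how the parametrization, rather than the cost, can dictate landscape features: lifting a matrix variable X to X = y y^T (a rank-1, Burer-Monteiro-style factorization) makes y = 0 a critical point of the lifted cost g(y) = f(y y^T) for any smooth f, because the differential of y -> y y^T vanishes at the origin. The specific cost f below is an arbitrary choice for the demo.

```python
import numpy as np

def numerical_grad(g, y, h=1e-6):
    """Central finite-difference gradient of a scalar function g at y."""
    grad = np.zeros_like(y)
    for k in range(len(y)):
        e = np.zeros_like(y)
        e[k] = h
        grad[k] = (g(y + e) - g(y - e)) / (2 * h)
    return grad

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # arbitrary target matrix
f = lambda X: np.sum((X - A) ** 2)        # one arbitrary smooth cost
g = lambda y: f(np.outer(y, y))           # lifted cost on the factor y

# y = 0 is critical for every choice of A (and every smooth f),
# while a generic nonzero y is not.
print(numerical_grad(g, np.zeros(2)))
print(numerical_grad(g, np.ones(2)))
```

    Whether such parametrization-induced critical points are harmless (strict saddles) or spurious is exactly the kind of question the paper's framework addresses.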

    3D ab initio modeling in cryo-EM by autocorrelation analysis

    Single-Particle Reconstruction (SPR) in Cryo-Electron Microscopy (cryo-EM) is the task of estimating the 3D structure of a molecule from a set of noisy 2D projections, taken from unknown viewing directions. Many algorithms for SPR start from an initial reference molecule, and alternate between refining the estimated viewing angles given the molecule, and refining the molecule given the viewing angles. This scheme is called iterative refinement. Reliance on an initial, user-chosen reference introduces model bias, and poor initialization can lead to slow convergence. Furthermore, since no ground truth is available for an unsolved molecule, it is difficult to validate the obtained results. This creates the need for high-quality ab initio models that can be quickly obtained from experimental data with minimal priors, and which can also be used for validation. We propose a procedure to obtain such an ab initio model directly from raw data using Kam's autocorrelation method. Kam's method has been known since 1980, but it leads to an underdetermined system, with missing orthogonal matrices. Until now, this system has been solved only for special cases, such as highly symmetric molecules or molecules for which a homologous structure was already available. In this paper, we show that knowledge of just two clean projections is sufficient to guarantee a unique solution to the system. This system is solved by an optimization-based heuristic. For the first time, we are then able to obtain a low-resolution ab initio model of an asymmetric molecule directly from raw data, without 2D class averaging and without tilting. Numerical results are presented on both synthetic and experimental data.
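    The principle behind Kam-style autocorrelation analysis has a simple 1-D analogue (an illustrative stand-in, not cryo-EM itself): each measurement is the signal under an unknown nuisance transformation, here a random circular shift playing the role of an unknown viewing direction. The power spectrum is invariant to shifts, so averaging |FFT|^2 over measurements yields a transformation-free statistic of the signal without ever estimating the shifts.

```python
import numpy as np

rng = np.random.default_rng(2)
signal = rng.normal(size=64)
true_power = np.abs(np.fft.fft(signal)) ** 2

# Measurements: copies of the signal under unknown circular shifts,
# standing in for projections from unknown viewing directions.
measurements = [np.roll(signal, rng.integers(64)) for _ in range(50)]
avg_power = np.mean([np.abs(np.fft.fft(m)) ** 2 for m in measurements],
                    axis=0)

# The shift-invariant statistic is recovered exactly (up to float error).
print(np.max(np.abs(avg_power - true_power)))
```

    The catch mirrored in the abstract: the power spectrum discards the Fourier phases, just as Kam's autocorrelations leave orthogonal matrices undetermined, and recovering that missing information is the hard part.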

    Moment Varieties for Mixtures of Products

    The setting of this article is nonparametric algebraic statistics. We study moment varieties of conditionally independent mixture distributions on R^n. These are the secant varieties of toric varieties that express independence in terms of univariate moments. Our results revolve around the dimensions and defining polynomials of these varieties.
    Comment: 14 pages
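    A univariate toy connecting to the article's theme (the paper treats multivariate mixtures; this 1-D case is only an illustration of what "defining polynomials" of a moment variety look like): the moments m_d = sum_k w_k * a_k**d of a mixture of r point masses fill a Hankel matrix of rank at most r, so all (r+1) x (r+1) minors of that matrix vanish.

```python
import numpy as np

w = np.array([0.2, 0.5, 0.3])         # r = 3 mixing weights
a = np.array([-1.0, 0.5, 2.0])        # support points on the real line
m = np.array([np.sum(w * a ** d) for d in range(7)])   # moments m_0..m_6

# 4x4 Hankel matrix of moments: entry (i, j) is m_{i+j}.
H = np.array([[m[i + j] for j in range(4)] for i in range(4)])
print(np.linalg.matrix_rank(H))       # 3: one less than the matrix size
```

    The vanishing 4x4 determinant is a polynomial relation among the moments that every 3-atom mixture satisfies, the simplest instance of the defining equations the abstract refers to.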